Image Processing and Image Analysis (21 Articles)
Complex transmission matrix retrieval for a highly scattering medium via regional phase differentiation
Qiaozhi He, Rongjun Shao, Yuan Qu, Linxian Liu, Chunxu Ding, and Jiamiao Yang
Accurately measuring the complex transmission matrix (CTM) of a scattering medium (SM) is of critical importance for applications in anti-scattering optical imaging, phototherapy, and optical neural networks. Non-interferometric approaches based on phase retrieval algorithms can robustly extract the CTM from the speckle patterns formed by multiple probing fields traversing the SM. However, when an amplitude-type spatial light modulator is employed for probing-field modulation, the absence of phase control frequently causes convergence to a local optimum, undermining measurement accuracy. Here, we propose a high-accuracy CTM retrieval (CTMR) approach based on regional phase differentiation (RPD). It incorporates a sequence of additional phase masks into the probing fields, imposing a priori constraints on the phase retrieval algorithm. By distinguishing the variance of the speckle patterns produced by different phase masks, RPD-CTMR effectively directs the algorithm towards a solution that closely approximates the CTM of the SM. We built a prototype of a digital-micromirror-device-modulated RPD-CTMR system. By accurately measuring the CTM of diffusers, we enhanced the peak-to-background ratio of anti-scattering focusing by a factor of 3.6 and reduced the bit error rate of anti-scattering image transmission by a factor of 24. Our approach facilitates precise modulation of scattered optical fields and should thereby foster advances in diverse fields including high-resolution microscopy, biomedical optical imaging, and optical communications.
Photonics Research
  • Publication Date: Apr. 08, 2024
  • Vol. 12, Issue 5, 876 (2024)
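A minimal sketch of the non-interferometric measurement model behind this kind of CTM retrieval: binary (amplitude-only) probing fields, each carrying one of a few known auxiliary phase masks, propagate through a random complex matrix, and only speckle intensities are recorded. All sizes, distributions, and variable names are illustrative assumptions, not the authors' implementation; the actual RPD-CTMR algorithm then recovers the matrix from these intensity-only data.

```python
# Forward-model sketch for non-interferometric transmission-matrix probing,
# loosely following the measurement scheme described in the abstract.
# Sizes and distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_probe = 64, 32, 200           # DMD pixels, camera pixels, probes
T = (rng.normal(size=(n_out, n_in)) +         # unknown complex transmission matrix
     1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

# Amplitude-only (binary) probing fields, as produced by a DMD.
probes = rng.integers(0, 2, size=(n_probe, n_in)).astype(float)

# Regional phase masks: each probe additionally carries one of a few known
# phase patterns, acting as the a-priori constraint discussed in the abstract.
n_masks = 4
masks = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_masks, n_in)))
mask_idx = rng.integers(0, n_masks, size=n_probe)

fields = probes * masks[mask_idx]             # modulated probing fields
speckle = np.abs(fields @ T.T) ** 2           # intensity-only camera measurements

# A phase-retrieval algorithm would now recover each row of T from `speckle`
# and `fields`; the known per-probe phase masks restrict the solution space.
print(speckle.shape)                          # (n_probe, n_out)
```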
Learning the imaging mechanism directly from optical microscopy observations
Ze-Hao Wang, Long-Kun Shan, Tong-Tian Weng, Tian-Long Chen, Xiang-Dong Chen, Zhang-Yang Wang, Guang-Can Guo, and Fang-Wen Sun
Optical microscopy images play an important role in scientific research through direct visualization of the nanoworld, where the imaging mechanism is described as the convolution of the point spread function (PSF) with the emitters. Based on a priori knowledge of the PSF or an equivalent PSF, it is possible to explore the nanoworld more precisely. However, directly extracting the PSF from microscopy images remains an outstanding challenge. Here, with the help of self-supervised learning, we propose a physics-informed masked autoencoder (PiMAE) that enables a learnable estimation of the PSF and emitters directly from raw microscopy images. We demonstrate our method on synthetic data and in real-world experiments, with high accuracy and noise robustness. PiMAE outperforms DeepSTORM and the Richardson–Lucy algorithm on synthetic-data tasks with average improvements of 19.6% and 50.7% (35 tasks), respectively, as measured by the normalized root mean square error (NRMSE). This is achieved without prior knowledge of the PSF, in contrast to the supervised approach used by DeepSTORM and the known-PSF assumption of the Richardson–Lucy algorithm. Our method, PiMAE, provides a feasible scheme for recovering the hidden imaging mechanism in optical microscopy and has the potential to learn hidden mechanisms in many more systems.
Photonics Research
  • Publication Date: Dec. 08, 2023
  • Vol. 12, Issue 1, 7 (2024)
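The imaging model this paper starts from is a convolution of the PSF with an emitter map. The sketch below builds a synthetic frame under that model and runs a few iterations of the Richardson–Lucy deconvolution used as a baseline in the abstract; the PSF shape, emitter density, and noise level are illustrative assumptions, and PiMAE itself (which estimates the PSF rather than assuming it) is not reproduced here.

```python
# Synthetic microscopy frame: image = PSF * emitters + shot noise, followed by
# a few Richardson-Lucy iterations (the classical known-PSF baseline).
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

emitters = np.zeros((128, 128))
idx = rng.integers(0, 128, size=(60, 2))             # 60 random point emitters
emitters[idx[:, 0], idx[:, 1]] = rng.uniform(0.5, 1.0, size=60)

psf = gaussian_psf()
image = np.clip(fftconvolve(emitters, psf, mode="same"), 0, None)
image = rng.poisson(1000 * image) / 1000.0           # Poisson (shot) noise

# Richardson-Lucy deconvolution; note it assumes the PSF is known, unlike PiMAE.
estimate = np.full_like(image, image.mean())
psf_flip = psf[::-1, ::-1]
for _ in range(30):
    blurred = fftconvolve(estimate, psf, mode="same")
    ratio = image / np.clip(blurred, 1e-12, None)
    estimate *= fftconvolve(ratio, psf_flip, mode="same")
```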
Passive imaging through dense scattering media
Yaoming Bian, Fei Wang, Yuanzhe Wang, Zhenfeng Fu, Haishan Liu, Haiming Yuan, and Guohai Situ
Imaging through non-static and optically thick scattering media such as dense fog, heavy smoke, and turbid water is crucial in various applications. However, most existing methods rely either on active, coherent illumination or on image priors, preventing their application in situations where only passive illumination is possible. In this study we present a universal passive method for imaging through dense scattering media that does not depend on any prior information. By combining the selection of small-angle components from the incoming, information-carrying scattered light with an image enhancement algorithm that incorporates time-domain minimum filtering and denoising, we show that the proposed method can dramatically improve the signal-to-interference ratio and contrast of the raw camera image in field experiments.
Photonics Research
  • Publication Date: Dec. 22, 2023
  • Vol. 12, Issue 1, 134 (2024)
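A rough sketch of the computational half of such a pipeline, under the assumption that the fluctuating scattered light only adds intensity, so a per-pixel temporal minimum favours the quasi-static direct component. The optical small-angle selection is performed in hardware and is not modeled, and the filter and stretch choices below are illustrative, not the authors' algorithm.

```python
# Time-domain minimum filtering over a frame stack, followed by simple
# denoising and contrast stretching.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(frames, sigma=1.0):
    """frames: (T, H, W) stack of raw camera images of a quasi-static scene."""
    est = frames.min(axis=0)                      # time-domain minimum filtering
    est = gaussian_filter(est, sigma=sigma)       # simple spatial denoising
    lo, hi = np.percentile(est, [1, 99])
    return np.clip((est - lo) / (hi - lo + 1e-12), 0, 1)   # contrast stretch

# Synthetic demo: a weak static target under strong, fluctuating scattered light.
rng = np.random.default_rng(2)
target = np.zeros((64, 64))
target[24:40, 24:40] = 0.2
frames = target + 0.8 + 0.2 * np.abs(rng.standard_normal((100, 64, 64)))
print(enhance(frames).shape)                      # (64, 64)
```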
Learning-based super-resolution interpolation for sub-Nyquist sampled laser speckles
Huanhao Li, Zhipeng Yu, Qi Zhao, Yunqi Luo, Shengfu Cheng, Tianting Zhong, Chi Man Woo, Honglin Liu, Lihong V. Wang, Yuanjin Zheng, and Puxiang Lai
Information retrieval from visually random optical speckle patterns is desired in many scenarios yet considered challenging. It requires accurate understanding or mapping of the multiple scattering process, or a reliable capability to reverse or compensate for the scattering-induced phase distortions. In either case, effective resolving and digitization of the speckle patterns are necessary. Nevertheless, on some occasions, to increase the acquisition speed and/or signal-to-noise ratio (SNR), speckles captured by cameras are inevitably sampled in the sub-Nyquist domain via pixel binning (one camera pixel contains multiple speckle grains) due to the finite size or limited bandwidth of photosensors. Such a down-sampling process is irreversible; it destroys the fine structures of the speckle grains and hence the encoded information, preventing successful information extraction. To retrieve the lost information, super-resolution interpolation for such sub-Nyquist sampled speckles is needed. In this work, a deep neural network, namely SpkSRNet, is proposed to effectively upsample speckles that are sampled below 1/10 of the Nyquist criterion into well-resolved ones that not only resemble the overall morphology of the original speckles (decomposing multiple speckle grains from one camera pixel) but also recover the lost complex information (human faces in this study) with high fidelity under normal- and low-light conditions, which is impossible with classic interpolation methods. These successful speckle super-resolution interpolation demonstrations are essentially enabled by the strong implicit correlation among speckle grains, which is not directly quantifiable but can be discovered by the well-trained network. With further engineering, the proposed learning platform may benefit many scenarios that are otherwise physically inaccessible, enabling fast acquisition of speckles with sufficient SNR and opening up new avenues for seeing big and seeing clearly simultaneously in complex scenarios.
Photonics Research
  • Publication Date: Mar. 30, 2023
  • Vol. 11, Issue 4, 631 (2023)
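The degradation described above can be reproduced with a simple simulation: a fully resolved speckle pattern is averaged over large camera pixels (sub-Nyquist binning), and a classic cubic interpolation back to the original grid illustrates what the learned SpkSRNet is meant to outperform. Aperture size and binning factor are assumptions for illustration.

```python
# Speckle generation, pixel binning, and classic interpolation back to the
# original grid (no learning involved).
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)

# Fully developed speckle: random phase over a circular pupil, then |FFT|^2.
n = 256
yy, xx = np.mgrid[:n, :n] - n // 2
pupil = (xx**2 + yy**2) < (n // 8) ** 2
field = pupil * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
speckle = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
speckle /= speckle.max()

# Sub-Nyquist acquisition: each camera pixel averages an 8x8 block of grains.
b = 8
binned = speckle.reshape(n // b, b, n // b, b).mean(axis=(1, 3))

# Classic cubic interpolation back to the original grid.
interp = zoom(binned, b, order=3)
print(speckle.shape, binned.shape, interp.shape)   # (256, 256) (32, 32) (256, 256)
```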
Deep coded exposure: end-to-end co-optimization of flutter shutter and deblurring processing for general motion blur removal
Zhihong Zhang, Kaiming Dong, Jinli Suo, and Qionghai Dai
Coded exposure photography is a promising computational imaging technique capable of addressing motion blur much better than a conventional camera by tailoring invertible blur kernels. However, existing methods suffer from restrictive assumptions, complicated preprocessing, and inferior performance. To address these issues, we propose an end-to-end framework that handles general motion blur with a unified deep neural network and optimizes the shutter's encoding pattern together with the deblurring processing to achieve high-quality sharp images. The framework incorporates a learnable flutter shutter sequence to capture coded exposure snapshots and a learning-based deblurring network to restore sharp images from the blurry inputs. By jointly co-optimizing the encoding and deblurring modules, our approach avoids exhaustively searching for encoding sequences and achieves optimal overall deblurring performance. Compared with existing coded-exposure-based motion deblurring methods, the proposed framework eliminates tedious preprocessing steps such as foreground segmentation and blur kernel estimation, and extends coded exposure deblurring to more general blind and nonuniform cases. Both simulation and real-data experiments demonstrate the superior performance and flexibility of the proposed method.
Photonics Research
  • Publication Date: Sep. 27, 2023
  • Vol. 11, Issue 10, 1678 (2023)
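The core idea of coded exposure can be seen in the blur kernel's spectrum: a constant (box) exposure has deep spectral nulls, whereas an on/off flutter code keeps the spectrum away from zero and hence invertible. The sequence below is a fixed illustrative example; the paper's contribution is to learn this code jointly with the deblurring network.

```python
# Compare the spectrum of a box motion-blur kernel with that of a coded
# (flutter-shutter) kernel for uniform linear motion of one pixel per chop.
import numpy as np

T = 32                                    # shutter chops per exposure
box_code = np.ones(T)                     # conventional (always-open) shutter
flutter = np.array([1,0,1,1,0,0,1,0,1,1,1,0,0,1,0,1,
                    1,0,0,1,1,0,1,0,0,1,1,1,0,1,0,1], dtype=float)

def kernel_spectrum(code, n_fft=256):
    """Magnitude spectrum of the motion-blur kernel induced by `code`."""
    kernel = code / code.sum()
    return np.abs(np.fft.rfft(kernel, n_fft))

mtf_box = kernel_spectrum(box_code)
mtf_flutter = kernel_spectrum(flutter)
# The box kernel's spectrum has near-zero nulls (information lost there),
# while the coded kernel typically keeps its spectrum away from zero.
print(mtf_box.min(), mtf_flutter.min())
```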
Snapshot spectral compressive imaging reconstruction using convolution and contextual Transformer
Lishun Wang, Zongliang Wu, Yong Zhong, and Xin Yuan
Spectral compressive imaging (SCI) encodes a high-dimensional hyperspectral image into a two-dimensional snapshot measurement and then uses algorithms to reconstruct the spatio-spectral data cube. At present, the main bottleneck of SCI is the reconstruction algorithm: state-of-the-art (SOTA) reconstruction methods generally suffer from long reconstruction times and/or poor detail recovery. In this paper, we propose a hybrid network module, namely, a convolution and contextual Transformer (CCoT) block, that combines the inductive bias of convolution with the powerful modeling capacity of the Transformer, helping the reconstruction restore fine details. We integrate the proposed CCoT block into a physics-driven deep unfolding framework based on the generalized alternating projection (GAP) algorithm, yielding the GAP-CCoT network, and apply GAP-CCoT to SCI reconstruction. Through experiments on extensive synthetic and real data, our proposed model achieves higher reconstruction quality (>2 dB gain in peak signal-to-noise ratio on simulated benchmark datasets) and much shorter running times than existing SOTA algorithms. The code and models are publicly available at https://github.com/ucaswangls/GAP-CCoT.
Photonics Research
  • Publication Date: Jul. 22, 2022
  • Vol. 10, Issue 8, 1848 (2022)
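For context, a plain generalized alternating projection (GAP) loop for snapshot SCI is sketched below, with a Gaussian filter standing in for the learned CCoT denoiser; the masks, sizes, and iteration count are illustrative assumptions rather than the paper's configuration.

```python
# GAP iteration for snapshot SCI: y = sum_b (mask_b * x_b); the Euclidean
# projection uses the fact that Phi Phi^T is diagonal for this forward model.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
H, W, B = 64, 64, 8                          # spatial size, spectral bands

cube = rng.random((H, W, B))                 # ground-truth spectral data cube
masks = rng.integers(0, 2, (H, W, B)).astype(float)    # coded-aperture masks
y = (masks * cube).sum(axis=2)               # single 2D snapshot measurement

phi_sum = (masks**2).sum(axis=2) + 1e-12     # diagonal of Phi Phi^T, per pixel
x = masks * y[..., None] / phi_sum[..., None]           # simple initialization

for _ in range(20):
    yb = (masks * x).sum(axis=2)                         # current re-projection
    x = x + masks * ((y - yb) / phi_sum)[..., None]      # GAP Euclidean projection
    x = gaussian_filter(x, sigma=(1, 1, 0))              # denoiser (CCoT in the paper)

print(np.abs(x - cube).mean())
```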
Fast and robust phase retrieval for masked coherent diffractive imaging
Li Song and Edmund Y. Lam
Conventional phase retrieval algorithms for coherent diffractive imaging (CDI) require many iterations to deliver reasonable results, even when a known mask is used as a strong constraint in the imaging setup, an approach known as masked CDI. This paper proposes a fast and robust phase retrieval method for masked CDI based on the alternating direction method of multipliers (ADMM). We propose a plug-and-play ADMM to incorporate the prior knowledge of the mask, but note that commonly used denoisers are not directly suitable as regularizers for complex-valued latent images. We therefore develop a regularizer based on the structure tensor and the Harris corner detector. Compared with conventional phase retrieval methods, our technique achieves comparable reconstruction results in less time for masked CDI. Moreover, validation experiments on real in situ CDI data for both intensity and phase objects show that our approach is more than 100 times faster than the baseline method at reconstructing one complex-valued image, making it usable in challenging situations such as imaging dynamic objects. Furthermore, phase retrieval results for single diffraction patterns demonstrate the robustness of the proposed ADMM.
Photonics Research
  • Publication Date: Mar. 01, 2022
  • Vol. 10, Issue 3, 758 (2022)
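For reference, the masked-CDI forward model and the classical alternating-projection baseline look roughly as follows: alternately enforce the measured Fourier magnitude and the known mask constraint. The paper's ADMM scheme with its structure-tensor/Harris-corner regularizer replaces these plain projections; the sketch below shows only the baseline, with assumed sizes and a synthetic binary mask.

```python
# Masked-CDI forward model and a plain alternating-projection reconstruction.
import numpy as np

rng = np.random.default_rng(5)
n = 128
mask = np.zeros((n, n))
mask[32:96, 32:96] = 1.0                                   # known binary mask

# Complex-valued ground-truth object and its masked diffraction magnitude.
obj = rng.random((n, n)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
meas = np.abs(np.fft.fft2(mask * obj))                     # measured magnitude

est = mask * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))  # random start
for _ in range(200):
    F = np.fft.fft2(est)
    F = meas * F / (np.abs(F) + 1e-12)       # Fourier-magnitude projection
    est = mask * np.fft.ifft2(F)             # mask/support projection

err = np.linalg.norm(np.abs(np.fft.fft2(est)) - meas) / np.linalg.norm(meas)
print(f"relative magnitude error: {err:.3e}")
```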
Antibunching and superbunching photon correlations in pseudo-natural light
Zhiyuan Ye, Hai-Bo Wang, Jun Xiong, and Kaige Wang
Since Hanbury Brown and Twiss revealed the photon bunching effect of a thermal light source in 1956, almost all studies in correlation optics have been based on light's intensity fluctuation, regardless of the fact that polarization fluctuation is a basic attribute of natural light. In this work, we uncover the polarization fluctuation and the corresponding photon correlations by proposing a new light-source model, termed pseudo-natural light, which embodies both intensity and polarization fluctuations. Unexpectedly, strong antibunching and superbunching effects can be realized simultaneously in such a source, whose second-order correlation coefficient g(2) can be continuously tuned across 1. In particular, for a symmetric Bernoulli distribution of the polarization fluctuation, g(2) can in principle range from 0 to arbitrarily large values. In pseudo-natural light, while the bunching effects of both the intensity and polarization fluctuations enhance the bunching to a superbunching photon correlation, the antibunching correlation of the polarization fluctuation can also be extracted experimentally through a division operation. The antibunching effect, and its combination with the bunching effect, will enable new applications in quantum imaging. As heuristic examples, we carry out high-quality positive or negative ghost imaging, and devise high-efficiency polarization-sensitive and edge-enhanced imaging. This work therefore sheds light on the development of multiple and broad correlation functions for natural light.
Photonics Research
  • Publication Date: Feb. 22, 2022
  • Vol. 10, Issue 3, 668 (2022)
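The g(2) behaviour described above can be checked numerically with a toy model: thermal (negative-exponential) intensity fluctuations combined with a symmetric Bernoulli polarization flip between two orthogonal states. Co-polarized detection stacks both bunching effects (g(2) toward 4), while cross-polarized detection is antibunched (g(2) toward 0). The distributions and sample count are illustrative assumptions, not the authors' source.

```python
# Toy Monte Carlo estimate of the normalized second-order correlation
# g2 = <I_a I_b> / (<I_a><I_b>) for a pseudo-natural-light-like model.
import numpy as np

rng = np.random.default_rng(6)
N = 1_000_000

I = rng.exponential(1.0, N)                  # thermal intensity fluctuation (g2 = 2)
horiz = rng.integers(0, 2, N).astype(float)  # symmetric Bernoulli polarization state

def g2(a, b):
    return np.mean(a * b) / (np.mean(a) * np.mean(b))

I_H = I * horiz                              # detector behind an H polarizer
I_V = I * (1.0 - horiz)                      # detector behind a V polarizer

print(f"intensity only        g2 = {g2(I, I):.2f}")      # ~2 (bunching)
print(f"co-polarized (H, H)   g2 = {g2(I_H, I_H):.2f}")   # ~4 (superbunching)
print(f"cross-polarized (H,V) g2 = {g2(I_H, I_V):.2f}")   # ~0 (antibunching)
```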
Two-stage matrix-assisted glare suppression at a large scale (Editors' Pick)
Daixuan Wu, Jiawei Luo, Zhibing Lu, Hanpeng Liang, Yuecheng Shen, and Zhaohui Li
Scattering-induced glares hinder the detection of weak objects in various scenarios. Recent advances in wavefront shaping show that one can not only enhance intensities through constructive interference but also suppress glares within a targeted region via destructive interference. However, lacking a physical model and mathematical guidance, existing approaches generally adopt a feedback-based scheme, which requires time-consuming hardware iteration. Moreover, previous demonstrations suppressed glare over only tens of speckles while controlling thousands of independent elements. Here, we report a method named two-stage matrix-assisted glare suppression (TAGS), which is capable of suppressing glares at a large scale without time-consuming hardware iteration. Using TAGS, we experimentally darkened an area containing 100 speckles by controlling only 100 independent elements, reducing the average intensity to 0.11 of its original value. TAGS is also computationally efficient, taking only 0.35 s to retrieve the matrix and 0.11 s to synthesize the wavefront. With the same number of independent controls, we further demonstrate suppression at larger scales of up to 256 speckles. We envision that the superior performance of TAGS at large scales will benefit a variety of demanding imaging tasks in scattering environments.
Photonics Research
  • Publication Date: Nov. 11, 2022
  • Vol. 10, Issue 12, 2693 (2022)
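The linear-algebra intuition behind matrix-assisted suppression: once a (sub-)transmission matrix from the controlled elements to the targeted speckles is known, the unit-power input that minimizes the total intensity over that region is the right singular vector associated with the smallest singular value. The sketch below illustrates only this intuition, not the authors' two-stage retrieval and synthesis procedure; the matrix statistics and sizes are assumptions.

```python
# Darkening a targeted speckle region via the smallest singular vector of a
# known (sub-)transmission matrix, compared against a random input.
import numpy as np

rng = np.random.default_rng(7)
n_speckles, n_controls = 100, 100

T = (rng.normal(size=(n_speckles, n_controls)) +
     1j * rng.normal(size=(n_speckles, n_controls))) / np.sqrt(2 * n_controls)

_, s, Vh = np.linalg.svd(T)
x_dark = Vh[-1].conj()                       # unit-power input minimizing ||T x||

x_rand = rng.normal(size=n_controls) + 1j * rng.normal(size=n_controls)
x_rand /= np.linalg.norm(x_rand)

I_dark = np.abs(T @ x_dark) ** 2
I_rand = np.abs(T @ x_rand) ** 2
print(f"suppression factor: {I_dark.mean() / I_rand.mean():.3f}")
```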
Optimize performance of a diffractive neural network by controlling the Fresnel number
Minjia Zheng, Lei Shi, and Jian Zi
Photonics Research
  • Publication Date: Nov. 01, 2022
  • Vol. 10, Issue 11, 2667 (2022)